Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers
Authors
Abstract
Similar papers
Supervised Neural Gas for Learning Vector Quantization
In this contribution we combine the generalized learning vector quantization (GLVQ) approach with the neighborhood-oriented learning of the neural gas network (NG). In this way we obtain a supervised version of the NG, which we call supervised NG (SNG). We show that the SNG is more robust than the GLVQ because the neighborhood learning avoids numerical instabilities as they may occur for com...
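As a rough illustration of how rank-based neighborhood cooperation can be combined with an LVQ-style supervised update, here is a minimal NumPy sketch; the exponential neighborhood function, the range parameter `lmbda`, and the repulsion of the closest wrong-class prototype are illustrative assumptions, not the exact SNG rule derived in the paper.

```python
import numpy as np

def sng_step(prototypes, labels, x, y, lr=0.05, lmbda=1.0):
    """One supervised-neural-gas-style update for a single sample (x, y).

    prototypes: (k, d) array of prototype vectors
    labels:     (k,) array of prototype class labels
    Correct-class prototypes are pulled toward x with a rank-based
    neighborhood factor exp(-rank / lmbda); the closest wrong-class
    prototype is pushed away (an illustrative choice).
    """
    dists = np.sum((prototypes - x) ** 2, axis=1)

    # Rank the correct-class prototypes by distance (rank 0 = closest).
    same = np.where(labels == y)[0]
    ranks = np.argsort(np.argsort(dists[same]))
    for idx, r in zip(same, ranks):
        h = np.exp(-r / lmbda)                 # neighborhood strength
        prototypes[idx] += lr * h * (x - prototypes[idx])

    # Push the nearest wrong-class prototype away from x.
    other = np.where(labels != y)[0]
    if other.size:
        j = other[np.argmin(dists[other])]
        prototypes[j] -= lr * (x - prototypes[j])
    return prototypes
```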
Supervised Neural Gas and Relevance Learning in Learning Vector Quantization
Learning vector quantization (LVQ) as proposed by Kohonen is a simple and intuitive, though very successful, prototype-based clustering algorithm. Generalized relevance LVQ (GRLVQ) constitutes a modification which obeys the dynamics of a gradient descent and allows an adaptive metric utilizing relevance factors for the input dimensions. As iterative algorithms with local learning rules, LVQ and m...
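The two ingredients named here, a winner-based prototype update and a relevance-weighted metric, can be sketched as follows; the sketch uses a fixed relevance vector and an LVQ1-style attract/repel step, whereas GRLVQ additionally adapts the relevance factors themselves by gradient descent.

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, relevances, lr=0.05):
    """LVQ1-style update with a relevance-weighted squared distance.

    d(x, w) = sum_i relevances[i] * (x_i - w_i)^2, as used in GRLVQ;
    here the relevance factors are kept fixed for simplicity.
    """
    dists = np.sum(relevances * (prototypes - x) ** 2, axis=1)
    j = np.argmin(dists)                      # winning prototype
    sign = 1.0 if labels[j] == y else -1.0    # attract if correct, repel if not
    prototypes[j] += sign * lr * (x - prototypes[j])
    return prototypes
```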
A Neural Network Approach to Similarity Learning
This paper presents a novel neural network model, called similarity neural network (SNN), designed to learn similarity measures for pairs of patterns. The model is guaranteed to compute a non-negative and symmetric measure, and shows good generalization capabilities even if a very small set of supervised examples is used for training. Preliminary experiments, carried out on some UCI datasets, are ...
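The snippet does not spell out the SNN architecture, but one simple way to obtain a measure that is non-negative and symmetric by construction is sketched below; the layer sizes, the averaging over both input orderings, and the sigmoid output are assumptions made purely for illustration, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2 * 3)), np.zeros(8)   # toy 2-layer net, 3-dim patterns
w2, b2 = rng.normal(size=8), 0.0

def _raw(a, b):
    # Unconstrained score for the ordered pair (a, b).
    h = np.tanh(W1 @ np.concatenate([a, b]) + b1)
    return w2 @ h + b2

def similarity(a, b):
    """Symmetric by averaging over both orderings, non-negative via a sigmoid."""
    s = 0.5 * (_raw(a, b) + _raw(b, a))
    return 1.0 / (1.0 + np.exp(-s))

# Example: similarity(np.array([1., 0., 0.]), np.array([0., 1., 0.]))
```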
An axiomatic approach to soft learning vector quantization and clustering
This paper presents an axiomatic approach to soft learning vector quantization (LVQ) and clustering based on reformulation. The reformulation of the fuzzy c-means (FCM) algorithm provides the basis for reformulating entropy-constrained fuzzy clustering (ECFC) algorithms. This analysis indicates that minimization of admissible reformulation functions using gradient descent leads to a broad varie...
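Since the reformulated fuzzy c-means (FCM) algorithm is the starting point here, a minimal sketch of the standard FCM alternation between membership and center updates may help fix ideas; the random initialization and the fuzzifier m = 2 are conventional choices, not taken from the paper.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, eps=1e-9, seed=0):
    """Plain fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        # Squared distances of every sample to every center, shape (n, c).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps
        u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u
```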
Generalized Learning Vector Quantization
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and ...
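A common way to write the GLVQ cost uses the relative distance difference mu(x) = (d_plus - d_minus) / (d_plus + d_minus), where d_plus and d_minus are the squared distances to the closest prototype of the correct class and of a wrong class; the sketch below performs one steepest-descent step on this criterion with the monotonic function f taken as the identity, an assumption since the snippet does not show the paper's choice of f.

```python
import numpy as np

def glvq_step(prototypes, labels, x, y, lr=0.05):
    """One steepest-descent step on mu = (d_plus - d_minus) / (d_plus + d_minus).

    Minimizing mu pulls the closest correct-class prototype toward x
    and pushes the closest wrong-class prototype away from x.
    """
    dists = np.sum((prototypes - x) ** 2, axis=1)
    same, other = labels == y, labels != y
    jp = np.where(same)[0][np.argmin(dists[same])]   # correct-class winner
    jm = np.where(other)[0][np.argmin(dists[other])]  # wrong-class winner
    dp, dm = dists[jp], dists[jm]

    denom = (dp + dm) ** 2
    prototypes[jp] += lr * (4 * dm / denom) * (x - prototypes[jp])  # attract
    prototypes[jm] -= lr * (4 * dp / denom) * (x - prototypes[jm])  # repel
    return prototypes
```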
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2020
ISSN: 2374-3468,2159-5399
DOI: 10.1609/aaai.v34i04.6038